Competency Level versus Level of Competency: The Field Evaluation Dilemma

Author

  • Robin L. Ringstad, PhD, California State University, Stanislaus
Abstract

This study examines the use of a competency-based scoring rubric to measure students’ field practicum performance and competency development. Rubrics were used to complete mid-year and final evaluations for 56 MSW students in their foundation field practicum. Results indicate that students scored higher than expected on competency development measures, appearing to provide evidence of good overall program outcomes in terms of the competency levels achieved by students. However, results also appear to provide evidence of grade inflation by field instructors, calling into question whether students have actually gained adequate skills to engage in competent social work practice.

Introduction

According to the Council on Social Work Education (CSWE) Educational Policy and Accreditation Standards (EPAS) (2008), field education is the signature pedagogy in social work education and, as such, “represents the central form of instruction and learning in which [... the] profession socializes its students to perform the role of practitioner” (p. 8). The role of the field practicum as a fundamental educational tool for professional practice is long-standing and widely accepted (Bogo et al., 2004; Fortune, McCarthy, & Abramson, 2001; Sherer & Peleg-Oren, 2005; Tapp, Macke, & McLendon, 2012). In fact, student performance in the field internship is often viewed as “the most critical checkpoint for entry to the profession” (Sowbel, 2011, p. 367). Yet, in spite of the centrality of field education to the preparation of social workers, little is known about how, what, and how well students learn professional skills through their field education experiences, or how competent they are in performing as professionals at the completion of their field internships.

Within the field education arena, much attention has been given to best practices for managing field education programs and to developing, planning, facilitating, and evaluating field placements. Similarly, evaluation of student performance has received wide attention. While “evaluations of student performance in field are of unquestionable importance in social work education [... and] serve as the primary means of assessing student competence in performing practice roles,” the difficulties of such evaluations have been well documented (Garcia & Floyd, 2002; Holden, Meenaghan, & Anastas, 2003; Raskin, 1994; Reid, Bailey-Dempsey, & Viggiani, 1996, p. 45; Sowbel, 2012, p. 35; Valentine, 2004). Evaluation of social work practice performance is complex and subjective, and it is often challenging to identify clear standards from which to assess performance (Widerman, 2003). Little evidence exists regarding the reliability and validity of field practicum evaluation methods “in discriminating the varying levels of social work competence in [...] students” (Regehr, Bogo, Regehr, & Power, 2007, p. 327). Prior authors have pointed out that the characteristics that make a student unsuitable for social work practice often first become evident in the field practicum (LaFrance, Gray, & Herbert, 2004; Moore, Dietz, & Jenkins, 1998).

“Given the reality that not all students will meet necessary professional standards,” one would expect field education to be the place where students are likely to be screened out of the profession (LaFrance et al., 2004, p. 326). Yet, prior literature indicates that it is rare for students to be evaluated as inadequate in field internships (Cole & Lewis, 1993; Fortune, 2003; Sowbel, 2011). In fact, many hold that field performance ratings are often inflated, as evidenced by the uniformly high ratings for the great majority of students (Bogo, Regehr, Hughes, Power, & Globerman, 2002; Raskin, 1994; Regehr et al., 2007). Developing strategies to fairly and accurately evaluate field performance is key to demonstrating student competency development in social work education programs and to ensuring that graduates possess an adequate level of competency to engage in social work practice.

Competency Assessment

Current CSWE accreditation standards require the assessment of students in both the field practicum and the classroom to ensure student proficiency on core competencies. Competencies are operationalized through the identification and measurement of practice behaviors, and accredited social work programs are required to measure student outcomes in each competency area. CSWE’s ten core competencies, as well as an abbreviated title or category used in the current study to indicate each competency, are presented in Table 1.

A variety of methods have been used for assessing student performance in field education. Examples include measuring interpersonal and practice skills, using self-efficacy scales, examining student and/or client satisfaction scores, and completing competency-based evaluations (Tapp et al., 2012). Tapp et al. (2012) discuss the importance of distinguishing between assessing students’ practice (a client-focused concept) and assessing students’ learning (a student-focused concept), and indicate that the demonstration of competencies and practice behaviors in field education is best related to a student-focused assessment of learning. Measurement of students’ actual performance via the use of competency-based tools is of particular relevance in social work due to CSWE’s focus on competency-based education.

There are two main types of competency-based measures: tools that measure theoretical knowledge within the competencies and tools that assess students’ abilities to perform competency-based behaviors, skills, and tasks (Tapp et al., 2012). Knowledge, values, and skills are all components of competency. The field practicum, however, is most explicitly intended to address the performance of competency-based behaviors in practice. It is critical, therefore, that students’ performance-based competency be evaluated. Direct evaluation of discrete practice behaviors represents a way for social work programs to demonstrate the incorporation of competencies into the field practicum and to gather data on students’ mastery of those competencies.

Purpose and Research Questions

The purpose of the current study was to explore the use of a particular evaluation method for assessing student performance-based competency development in the field practicum. The study was guided by a series of research questions: (a) Were the field evaluation tool and scoring rubric useful for measuring student performance-based competency in the ten competency areas? (b) Did student performance in the field practicum meet the outcome (benchmark) levels deemed acceptable by the Master of Social Work (MSW) program? (c) Did field evaluations differentiate between students’ performance levels from mid-year to final? (d) What student or program factors were related to student performance scores?

The research focused on the foundation (first) year field practicum in an MSW program. The foundation year was chosen because foundation practice behaviors are specifically delineated by CSWE and are, therefore, consistent across social work programs. Advanced-year practice behaviors, in contrast, are delineated by each individual social work program depending upon its own concentrations or specializations. Since advanced practice behaviors are unique to particular programs, analysis of student progress on the competencies could be affected by the particular behaviors being measured rather than by the measurement tool, and results would not be generalizable. For the foundation year, CSWE’s core competencies include 41 practice behaviors. It is assumed that the practice behaviors serve as indicators of the competency to which they are related (construct validity) and that adequate performance scores on the competency indicate the ability to perform as a competent practitioner (criterion-related validity). Assessment of construct and criterion-related validity was beyond the scope of the current study. Face validity of the evaluation tool was assured, however, by the use of CSWE-mandated competencies and practice behaviors as the items measured in the current study.

Agency field instructors were charged with completing student performance assessments. Field instructors were provided training on competency-based education, CSWE’s competencies and practice behaviors, and a scoring rubric used to rate students’ performance (see Figure 1). Students, field instructors, and assigned faculty liaisons collaborated at the beginning of and throughout the placement to identify specific field assignments that included the practice behaviors. While the practice behaviors being measured were the same for all students, the method of teaching and learning (and the specific tasks students engaged in) varied for each student based on individual learning plans and field assignments. Ongoing consultation was available to field instructors throughout the practicum via faculty liaisons and the MSW program field director.

Procedures

All foundation-year MSW students (N = 56) were assigned to a field placement, and all students developed a field practicum learning plan specifying activities they would engage in to practice and master each of the competencies. Learning plans were developed in collaboration with field instructors and faculty liaisons. All students were informed of the competencies and practice behaviors and were advised of the field evaluation process.

Field evaluations of students’ performance were completed by field instructors halfway through the field placement (mid-year) and again at the end of the field placement (final). Each student was rated on performance of each of the practice behaviors using the evaluation tool and scoring rubrics. Possible scores on each item ranged from 1 (significantly below expectations) to 5 (significantly exceeds expectations), with the expected score for most students being a 3 (meets expectations). Descriptors (anchor language) for each numerical score differed from the mid-year evaluation rubric to the final evaluation rubric, reflecting the expectation that while students’ skill levels would increase over the course of the placement, the numerical rating for most students would remain a 3 (meets expectations) on the final evaluation. The scoring rubrics showed good evidence of internal consistency, with a Cronbach’s alpha of .94 for mid-year scores and .95 for final scores. Figure 1 shows the complete scoring rubric.

All data from mid-year and final field evaluations were entered into SPSS for analysis. Descriptive statistics were used to examine results on demographic characteristics and on individual practice behaviors. Practice behaviors related to particular competencies were subsequently combined to determine composite scores representing students’ proficiency level on each competency. Bivariate analyses were used to examine differences between groups based on demographic characteristics and to explore relationships between mid-year and final evaluation scores.

During the course of this study, identifiable student and field instructor data were also collected. In this way, the data collected served a dual purpose, contributing student-specific outcome data as well as program-level assessment data. The student-specific data were used to inform program efforts to evaluate and address particular learning needs of individual students and to inform the program about the learning opportunities in particular placement agencies. Program-level aggregate data were used in the current study to explore the evaluation tool and scoring rubric and to answer the research questions guiding the study. Results reported in this study relate to program-level ...
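To make the analysis concrete, the sketch below shows one way the composite competency scores and Cronbach’s alpha reported above could be computed from item-level rubric ratings. It is a minimal Python sketch, not the study’s actual SPSS procedure, and the ratings, practice-behavior column names, and item-to-competency mapping are hypothetical.

# Minimal sketch (Python) of the composite scoring and reliability analysis
# described above; the study itself used SPSS. All data, column names, and
# the item-to-competency mapping here are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of summed scale)
    k = items.shape[1]
    item_variances = items.var(ddof=1)              # variance of each item (column)
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# One row per student, one column per practice behavior, each rated 1-5.
ratings = pd.DataFrame({
    "pb_01": [3, 4, 3, 5, 3],
    "pb_02": [3, 3, 4, 4, 3],
    "pb_03": [2, 4, 3, 5, 3],
    "pb_04": [3, 4, 4, 4, 2],
})

# Hypothetical mapping of practice behaviors to two of the ten competencies.
competency_items = {
    "ethical_practice":  ["pb_01", "pb_02"],
    "critical_thinking": ["pb_03", "pb_04"],
}

# Composite score on each competency = mean of its related practice behaviors.
composites = pd.DataFrame(
    {name: ratings[cols].mean(axis=1) for name, cols in competency_items.items()}
)

print(composites)
print(f"Cronbach's alpha across all items: {cronbach_alpha(ratings):.2f}")

Under the same assumptions, a paired comparison of each student’s mid-year and final composites (for example, with scipy.stats.ttest_rel) would correspond to the mid-year-to-final bivariate analysis described above.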


Similar articles

Evaluating the Students Assessment in Residency Program Based on Competency Based Approach

Objectives: The present study aimed to evaluate the assessment process in educational departments based on a competency-based approach. Method: In this study, evaluation of the program was based on a competency-based approach. For this purpose, the first stage of the evaluation was formulated as a worksheet. Then, a blueprint for a comprehensive assessment was developed with an emphasis on a competenc...


Evaluation of Clinical Competence and its Relationship with the Dimensions of Competence of Operating Room Technologists

Introduction: Today, advances in surgery and technology have made surgical technologists more in need of professional advancement in their work, since the presence of unskilled technologists on the health team can pose a threat to public health. The present study aimed to investigate the scope of ability and its relationship with the clinical competence of operating room technologists at Iran...


The Effectiveness of the experienced curriculum in developing of students’ cultural competency in Shiraz University of Medical Sciences

Context and goals: Cultural competence is considered a core component of professionalism and holds a unique place in the medical sciences due to the cultural diversity of clients. The goal of this research was to survey the effectiveness of the experienced curriculum in developing students’ cultural competency at Shiraz University of Medical Sciences. Methods:...


Proposal for a Modified Dreyfus and Miller Model with simplified competency level descriptions for performing self-rated surveys

In competency-based education, it is important to frequently evaluate the degree of competency achieved by establishing and specifying competency levels. To self-appraise one's own competency level, one needs a simple, clear, and accurate description for each competency level. This study aimed at developing competency stages that can be used in surveys and conceptualizing clear and precise comp...


Organizing “Nursing Mentors Committee”: an Effective Strategy for Improving Novice Nurses’ Clinical Competency

Introduction: One of the most significant problems in the clinical environment is unskilled and inexperienced nurses. Moreover, most managers are not aware of nurses’ proficiency and competency levels. Therefore, applying the new strategy of organizing a “nursing mentors committee,” together with orienting managers in this regard, could be considered a strategy to improve clinical co...



Journal: Field Scholar (fieldeducator.simmons.edu)

Volume 3, Issue 2

Pages: -

Publication date: 2014